---
title: Usage tab
description: Tracks prediction processing progress for use in accuracy, data drift, and predictions over time analysis.
---

# Usage tab {: #usage-tab }

After deploying a model and making predictions in production, monitoring model quality and performance over time is critical to ensure the model remains effective. This monitoring occurs on the [Data Drift](data-drift) and [Accuracy](deploy-accuracy) tabs and requires processing large amounts of prediction data, which can be subject to delays or rate limiting. The **Usage** tab tracks the progress of that prediction processing.


## Prediction Tracking chart {: #prediction-tracking-chart }

On the left side of the **Usage** tab is the **Prediction Tracking** chart, a bar chart of the prediction processing status over the last 24 hours or 7 days, tracking the number of processed, missing association ID, and rate-limited prediction rows. Depending on the selected view (24-hour or 7-day), the histogram's bins are hour-by-hour or day-by-day.

![](images/prediction-tracking-chart.png)

| | Chart element | Description |
|-|---------------|-------------|
| ![](images/icon-1.png) | Select time period | Selects the **Last 24 hours** or **Last 7 days** view. |
| ![](images/icon-2.png) | Use log scaling | Applies log scaling to the Prediction Tracking chart for deployments with more than 250,000 rows of predictions. |
| ![](images/icon-3.png) | Time of Receiving Predictions Data <br> (X-axis) | Displays the time range (by day or hour) represented by a bin, tracking the rows of prediction data received within that range. Predictions are timestamped when the system receives them for processing. This "time received" value is not equivalent to the prediction timestamp used in service health, data drift, and accuracy analysis. For DataRobot prediction environments, this timestamp can be slightly later than the prediction timestamp. For agent deployments, the timestamp represents when the DataRobot API received the prediction data from the agent. |
| ![](images/icon-4.png) | Row Count <br> (Y-axis) | Displays the number of prediction rows timestamped within a bin's time range (by day or hour). |
| ![](images/icon-5.png) | Prediction processing categories | Displays a bar chart tracking the status of prediction rows: <ul><li>**Processed**: Tracked for drift and accuracy analysis.</li><li>**Rate Limited**: Not tracked because prediction processing exceeded the hourly rate limit.</li><li>**Missing Association ID**: Not tracked because the prediction rows don't include an association ID and drift tracking isn't enabled.</li></ul> |

!!! note
    For a monitoring agent deployment, if you [implement large-scale monitoring](agent-use#enable-large-scale-monitoring), the prediction rows won't appear in this bar chart; however, the **Predictions Processing (Champion)** delay will track the pre-aggregated data.

To view additional information on the **Prediction Tracking** chart, hover over a column to see the time range during which the prediction data was received and the number of rows that were **Processed**, **Rate Limited**, or **Missing Association ID**:

![](images/prediction-tracking-details.png)
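Conceptually, each bar groups prediction rows by the hour (24-hour view) or day (7-day view) in which they were received, with one count per processing status. The actual aggregation happens inside DataRobot; the following is only an illustrative sketch of that binning logic, using hypothetical row data:

```python
from collections import Counter
from datetime import datetime

# Hypothetical prediction rows: (time received, processing status).
rows = [
    (datetime(2024, 5, 1, 9, 15), "Processed"),
    (datetime(2024, 5, 1, 9, 45), "Missing Association ID"),
    (datetime(2024, 5, 1, 10, 5), "Processed"),
    (datetime(2024, 5, 1, 10, 30), "Rate Limited"),
]

def bin_rows(rows, by="hour"):
    """Count rows per (time bin, status), as the chart's stacked bars do."""
    counts = Counter()
    for received_at, status in rows:
        if by == "hour":
            # 24-hour view: hour-by-hour bins
            bin_key = received_at.replace(minute=0, second=0, microsecond=0)
        else:
            # 7-day view: day-by-day bins
            bin_key = received_at.date()
        counts[(bin_key, status)] += 1
    return counts

counts = bin_rows(rows, by="hour")
```

Note that the bins are keyed on the "time received" value described above, not on the prediction timestamp used for drift and accuracy analysis.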


## Prediction and actuals processing delay {: #prediction-and-actuals-processing-delay }

On the right side of the **Usage** tab are the processing delays for **Predictions Processing (Champion)** and **Actuals Processing** (the actuals processing delay applies to _all_ models in the deployment, not just the champion):

![](images/predictions-processing-delay.png)

The **Usage** tab recalculates the processing delays without reloading the page; check the **Updated** value to see when the delays were last refreshed.
